
    Mixed reality participants in smart meeting rooms and smart home environments

    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment, the user displays behavior that shows how he or she, not necessarily consciously, provides the environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture), and the human participants in it. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, show how remote participants can take part in meeting activities, and offer some observations on translating these research results to smart home environments.
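    The profile described above combines the classical components (preferences, interests, characteristics, interaction behavior) with a physical representation derived from multimodal capture. A minimal sketch of such a composite profile might look as follows; all class and field names here are hypothetical illustrations, not the abstract's actual data model:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class PhysicalRepresentation:
        # Hypothetical capture data: tracked joint positions and current gaze
        # target, as a multimodal capture pipeline might deliver them.
        joint_positions: dict[str, tuple[float, float, float]] = field(default_factory=dict)
        gaze_target: str | None = None

    @dataclass
    class UserProfile:
        # Classical profile components named in the abstract.
        preferences: dict[str, str] = field(default_factory=dict)
        interests: list[str] = field(default_factory=list)
        characteristics: dict[str, str] = field(default_factory=dict)
        interaction_log: list[str] = field(default_factory=list)
        # Plus a physical representation obtained by multimodal capturing.
        physical: PhysicalRepresentation = field(default_factory=PhysicalRepresentation)

    # Example: a profile updated from a (hypothetical) gaze tracker.
    profile = UserProfile(interests=["meetings"], preferences={"language": "en"})
    profile.physical.gaze_target = "whiteboard"
    ```

    The point of the sketch is only that the physical state sits alongside, not inside, the classical preference data, so capture components and reasoning components can update it independently.
    
    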

    The Distributed Virtual Meeting Room Exercise

    In this paper, we describe our research on distributed virtual meeting rooms. Our starting point is our research on multi-party interaction, where the interaction may take place in real, augmented, and virtual environments. Moreover, those that interact may be humans, human-controlled avatars, (semi-)autonomous agents, mobile robots, etc. In this paper the emphasis is on connecting meeting environments and transforming perceived and captured meeting activity into multimedia representations of that activity, including re-generation of the activity in virtual reality, and making this re-generation perceivable in different, internet-connected environments. This re-generation is useful for on-line meeting assistance, remote meeting participation and assistance, and off-line access to meeting information.
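    The capture-transform-broadcast loop described above can be sketched as a small event pipeline: captured meeting activity becomes a representation that each internet-connected environment re-generates locally. This is a hedged illustration of the idea only; the event fields, class names, and callback interface are assumptions, not the paper's actual architecture:

    ```python
    from dataclasses import dataclass

    @dataclass
    class MeetingEvent:
        """One piece of captured meeting activity (hypothetical schema)."""
        participant: str
        modality: str   # e.g. "speech", "gesture", "gaze"
        payload: str
        timestamp: float

    def regenerate(event: MeetingEvent) -> dict:
        # Transform a captured event into a representation that a connected
        # (possibly virtual-reality) environment can render or log.
        return {
            "actor": event.participant,
            "action": event.modality,
            "data": event.payload,
            "t": event.timestamp,
        }

    class DistributedMeetingRoom:
        """Fan-out hub: connected environments subscribe with a callback."""

        def __init__(self) -> None:
            self.subscribers = []

        def connect(self, render_callback) -> None:
            self.subscribers.append(render_callback)

        def broadcast(self, event: MeetingEvent) -> None:
            representation = regenerate(event)
            for render in self.subscribers:
                render(representation)

    # Usage: one remote environment records what it would re-generate.
    received = []
    room = DistributedMeetingRoom()
    room.connect(received.append)
    room.broadcast(MeetingEvent("alice", "speech", "hello everyone", 0.0))
    ```

    Because each environment only receives the intermediate representation, the same broadcast serves live remote participation and, if the representations are stored, off-line access to the meeting.
    
    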